Can PAC learning algorithms tolerate random attribute noise?
Authors
Abstract
Similar Resources
PAC Learning with Nasty Noise
We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, which is a variant of the PAC model, proceeds as follows: Suppose that the learning algorithm during its execution asks for m examples. The examples that the algorithm gets are generat...
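A minimal sketch of the sampling process this abstract describes, under assumptions not stated in the truncated text: the target is a boolean function, the learner's m examples are drawn i.i.d. and labeled by the target, and an adversary who inspects the whole sample may replace roughly an eta-fraction of it with arbitrary labeled pairs. All names (nasty_sample, draw_instance, adversary, eta) and the binomial corruption budget are illustrative choices, not the paper's definitions.

```python
import random

def nasty_sample(m, target, draw_instance, adversary, eta, rng=random):
    """Return m labeled examples after adversarial (nasty) corruption."""
    sample = []
    for _ in range(m):
        x = draw_instance(rng)          # x drawn from the (unknown) example distribution
        sample.append((x, target(x)))   # clean label from the target concept
    # Corruption budget: one common formalization lets the adversary touch
    # a Binomial(m, eta) number of positions.
    budget = sum(rng.random() < eta for _ in range(m))
    # The adversary sees the entire sample and returns replacements for
    # at most `budget` positions of its own choosing.
    for idx, replacement in adversary(sample, budget):
        sample[idx] = replacement
    return sample

# Hypothetical usage: parity of the first two bits as the target; an
# adversary that simply flips the labels of the first `budget` examples.
if __name__ == "__main__":
    target = lambda x: x[0] ^ x[1]
    draw = lambda rng: tuple(rng.randint(0, 1) for _ in range(5))
    flip_first = lambda s, b: [(i, (s[i][0], 1 - s[i][1])) for i in range(b)]
    print(nasty_sample(10, target, draw, flip_first, eta=0.2))
```

The learner receives only the corrupted sample and cannot tell which entries were replaced, which is what distinguishes this adversarial setting from benign random noise.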
PAC-Learning with General Class Noise Models
We introduce a framework for class noise, in which most of the known class noise models for the PAC setting can be formulated. Within this framework, we study properties of noise models that enable learning of concept classes of finite VC-dimension with the Empirical Risk Minimization (ERM) strategy. We introduce simple noise models for which classical ERM is not successful. Aiming at a more ge...
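To make the ERM strategy named above concrete, here is a small sketch under one simple class-noise model (each label flipped independently with rate eta, i.e. random classification noise). The hypothesis class, noise rate, and function names are illustrative, not taken from the paper.

```python
import random

def noisy_sample(m, target, draw_instance, eta, rng=random):
    """Draw m examples and flip each label independently with probability eta."""
    sample = []
    for _ in range(m):
        x = draw_instance(rng)
        y = target(x)
        if rng.random() < eta:
            y = 1 - y                    # class noise: only the label is corrupted
        sample.append((x, y))
    return sample

def erm(hypotheses, sample):
    """Empirical Risk Minimization over a finite hypothesis class."""
    def empirical_error(h):
        return sum(h(x) != y for x, y in sample)
    return min(hypotheses, key=empirical_error)

if __name__ == "__main__":
    # Hypothetical finite class: the four "dictator" functions x -> x[i] on 4 bits.
    hypotheses = [lambda x, i=i: x[i] for i in range(4)]
    target = hypotheses[2]
    draw = lambda rng: tuple(rng.randint(0, 1) for _ in range(4))
    best = erm(hypotheses, noisy_sample(200, target, draw, eta=0.1))
    tests = [draw(random) for _ in range(5)]
    print([best(x) == target(x) for x in tests])
```

With a noise rate well below 1/2 and enough examples, the hypothesis with smallest empirical error on the noisy sample is, with high probability, close to the target; the abstract's point is that this is not guaranteed under every class-noise model.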
Pac-Learning Recursive Logic Programs: Efficient Algorithms
We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional \b...
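The query model named here, equivalence queries, can be illustrated generically: the learner proposes a hypothesis and the teacher either accepts it or returns a counterexample. The sketch below runs that loop for a toy class (monotone disjunctions) rather than for recursive logic programs, so the update rule is a stand-in, not the paper's algorithm.

```python
from itertools import product

def equivalence_query(hypothesis, target, domain):
    """Teacher: return a counterexample where the two disagree, else None."""
    for x in domain:
        if hypothesis(x) != target(x):
            return x
    return None

def learn_monotone_disjunction(n, target, domain):
    """Learn a monotone disjunction over n bits using only equivalence queries."""
    variables = set(range(n))            # start with every variable included
    hyp = lambda x, v=frozenset(variables): any(x[i] for i in v)
    while (cx := equivalence_query(hyp, target, domain)) is not None:
        # The hypothesis over-approximates the target, so counterexamples are
        # negative: every variable that is on in cx must be irrelevant.
        variables -= {i for i in range(n) if cx[i]}
        hyp = lambda x, v=frozenset(variables): any(x[i] for i in v)
    return variables

if __name__ == "__main__":
    n = 4
    domain = list(product([0, 1], repeat=n))
    target = lambda x: x[0] or x[3]      # the disjunction x0 OR x3
    print(learn_monotone_disjunction(n, target, domain))   # expected: {0, 3}
```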
Toward Attribute Efficient Learning Algorithms
We make progress on two important problems regarding attribute efficient learnability. First, we give an algorithm for learning decision lists of length k over n variables using 2^{Õ(k^{1/3})} · log n examples and time n^{Õ(k^{1/3})}. This is the first algorithm for learning decision lists that has both subexponential sample complexity and subexponential running time in the relevant parameters. Our approach establis...
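For context on the concept class, a decision list of length k over n boolean variables is an ordered sequence of k (literal, output) rules with a default output. The sketch below only evaluates such a list; it is illustrative and unrelated to the learning algorithm summarized above.

```python
def evaluate_decision_list(rules, default, x):
    """rules: list of ((index, required_value), output); the first matching rule fires."""
    for (i, value), output in rules:
        if x[i] == value:
            return output
    return default

# Hypothetical length-3 list over n = 6 variables:
# "if x2 = 1 output 1; elif x5 = 0 output 0; elif x0 = 1 output 1; else output 0".
rules = [((2, 1), 1), ((5, 0), 0), ((0, 1), 1)]
print(evaluate_decision_list(rules, 0, (1, 0, 0, 1, 0, 1)))   # -> 1, via the x0 rule
```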
Journal
Journal title: Algorithmica
Year: 1995
ISSN: 0178-4617,1432-0541
DOI: 10.1007/bf01300374